
    The role of sarcasm in hate speech. A multilingual perspective


    Killing me Softly: Creative and Cognitive Aspects of Implicitness in Abusive Language Online

    [EN] Abusive language is becoming a problematic issue for our society. The spread of messages that reinforce social and cultural intolerance can have dangerous effects on victims' lives. State-of-the-art technologies are often effective at detecting explicit forms of abuse, but leave unidentified those utterances that use weakly offensive language yet have a strongly hurtful effect. Scholars have advanced theoretical and qualitative observations on specific indirect forms of abusive language that are hard to recognize automatically. In this work, we propose a battery of statistical and computational analyses able to support these considerations, with a focus on the creative and cognitive aspects of implicitness, in texts from different sources such as social media and news. We experiment with transformers, a multi-task learning technique, and a set of linguistic features to reveal the elements involved in implicit and explicit manifestations of abuse, providing a solid basis for computational applications.
    Frenda, S.; Patti, V.; Rosso, P. (2022). Killing me Softly: Creative and Cognitive Aspects of Implicitness in Abusive Language Online. Natural Language Engineering. 1-22. https://doi.org/10.1017/S1351324922000316
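The abstract above contrasts explicit abuse, which surface cues can catch, with implicit abuse, which lacks them. A minimal sketch of what such surface cues might look like (the feature names and the lexicon below are hypothetical illustrations, not the paper's actual feature set):

```python
import re

# Hypothetical placeholder lexicon of overtly offensive terms; a real system
# would use a curated resource (this is NOT the authors' lexicon).
EXPLICIT_LEXICON = {"idiot", "stupid", "trash"}

def surface_features(text: str) -> dict:
    """Count simple surface cues of explicitness in a message.

    Messages scoring zero on all cues may still be abusive implicitly,
    which is exactly the gap the paper investigates.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        "explicit_hits": sum(t in EXPLICIT_LEXICON for t in tokens),
        "exclamations": text.count("!"),
        "all_caps_tokens": sum(w.isupper() and len(w) > 2 for w in text.split()),
    }

feats = surface_features("You are SUCH an idiot!!!")
```

An implicitly hurtful message (e.g. a sarcastic compliment) would score zero on every cue here, which is why purely lexical detectors miss it.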

    Do Linguistic Features Help Deep Learning? The Case of Aggressiveness in Mexican Tweets

    [EN] In recent years, the control of online user-generated content has become a priority, because of the increase in online aggressiveness and in legal cases over hate speech. Given the complexity and importance of this issue, this paper presents an approach that combines a deep learning framework with linguistic features for the recognition of aggressiveness in Mexican tweets. The approach has been evaluated on a collection of tweets released by the organizers of the shared task on aggressiveness detection in the context of the IberEval 2018 evaluation campaign. The use of a benchmark corpus makes it possible to compare our results with those obtained by the IberEval 2018 participant systems. However, looking at the achieved results, the linguistic features do not seem to help the deep learning classification for this task.
    The work of Simona Frenda and Paolo Rosso was partially funded by the Spanish MINECO under the research project SomEMBED (TIN2015-71147-C2-1-P).
    Frenda, S.; Banerjee, S.; Rosso, P.; Patti, V. (2020). Do Linguistic Features Help Deep Learning? The Case of Aggressiveness in Mexican Tweets. Computación y Sistemas. 24(2):633-643. https://doi.org/10.13053/CyS-24-2-3398
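A common way to "combine a deep learning framework with linguistic features", as the abstract describes, is to concatenate a handcrafted feature vector onto the model's learned text representation. A hedged sketch under that assumption (the lexicon, feature choices, and function names are illustrative, not taken from the paper):

```python
# Hypothetical seed lexicon of aggressive Spanish terms (placeholder only).
AGGRESSIVE_LEXICON = {"odio", "estupido", "pinche"}

def linguistic_features(tweet: str) -> list[float]:
    """Handcrafted aggressiveness cues (illustrative, not the paper's set)."""
    words = tweet.lower().split()
    n = max(len(tweet), 1)
    return [
        float(sum(w.strip("!?.,") in AGGRESSIVE_LEXICON for w in words)),  # lexicon hits
        tweet.count("!") / n,                    # exclamation density
        sum(c.isupper() for c in tweet) / n,     # "shouting" ratio
    ]

def combine(embedding: list[float], tweet: str) -> list[float]:
    """Concatenate a learned embedding (here a stand-in list) with the
    handcrafted features, producing the input to a downstream classifier."""
    return embedding + linguistic_features(tweet)

vec = combine([0.1, -0.2], "Te odio!!")
```

The paper's negative finding suggests that, at least in this setup, the deep model already captures whatever signal such appended features carry.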

    Stance or insults?


    Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter

    [EN] Patriarchal behavior, like other social habits, has been transferred online, appearing as misogynistic and sexist comments, posts, or tweets. This online hate speech against women has serious consequences in real life, and recently various legal cases have arisen against social platforms that barely block the spread of hate messages towards individuals. In this difficult context, this paper presents an approach able to detect the two sides of patriarchal behavior, misogyny and sexism, analyzing three collections of English tweets and obtaining promising results.
    The work of Simona Frenda and Paolo Rosso was partially funded by the Spanish MINECO under the research project SomEMBED (TIN2015-71147-C2-1-P). We also thank the support of CONACYT-Mexico (project FC-2410).
    Frenda, S.; Ghanem, B.; Montes-Y-Gómez, M.; Rosso, P. (2019). Online Hate Speech against Women: Automatic Identification of Misogyny and Sexism on Twitter. Journal of Intelligent & Fuzzy Systems. 36(5):4743-4752. https://doi.org/10.3233/JIFS-179023
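Detection approaches like the one described are usually compared against simple phrase-matching baselines. A minimal sketch of such a baseline (the cue phrases and function name are hypothetical placeholders, not the paper's model or data):

```python
# Placeholder cue phrases; a real baseline would draw on an annotated corpus.
MISOGYNY_CUES = {"get back to the kitchen", "women can't"}

def flag_tweet(tweet: str) -> bool:
    """Return True if the tweet contains any cue phrase (case-insensitive).

    A baseline like this catches only explicit, formulaic misogyny; the
    classifier described in the abstract aims to generalize beyond it.
    """
    low = tweet.lower()
    return any(cue in low for cue in MISOGYNY_CUES)

flagged = flag_tweet("Women can't drive")
```

Phrase baselines of this kind miss paraphrases and implicit sexism entirely, which motivates the learned approach the paper evaluates.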